
    Method of Real-Time Principal-Component Analysis

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the fact that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.

    Optimal Beamforming for Physical Layer Security in MISO Wireless Networks

    A wireless network of multiple transmitter-user pairs overheard by an eavesdropper is considered, in which the transmitters are equipped with multiple antennas while the users and the eavesdropper each have a single antenna. Under different levels of wireless channel knowledge, the problem of interest is beamforming either to optimize the users' quality-of-service (QoS) in terms of their secrecy throughputs or to maximize the network's energy efficiency subject to the users' QoS requirements. These are difficult optimization problems with many nonconvex constraints and nonlinear equality constraints in the beamforming vectors. The paper develops low-complexity, rapidly converging path-following computational procedures for the optimal beamforming solution. Their practicability is demonstrated through numerical examples.
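
    The abstract does not reproduce the system model, so the following Python sketch only illustrates the quantity being optimized for a single transmitter-user pair: the secrecy rate, i.e., the intended user's achievable rate minus the eavesdropper's, as a function of a beamforming vector. The function secrecy_rate and all channel, interference, and noise parameters are illustrative assumptions, not the paper's formulation.

        import numpy as np

        def secrecy_rate(w, h, g, interference_u=0.0, interference_e=0.0, sigma2=1.0):
            """Secrecy rate (bits/s/Hz) of one MISO transmitter-user pair.

            w : complex beamforming vector of the transmitter (N antennas)
            h : complex channel vector, transmitter -> intended user
            g : complex channel vector, transmitter -> eavesdropper
            interference_u / interference_e : aggregate interference power seen
                by the user / eavesdropper from the other transmitter-user pairs
            sigma2 : receiver noise power
            """
            sinr_user = np.abs(h.conj() @ w) ** 2 / (interference_u + sigma2)
            sinr_eve = np.abs(g.conj() @ w) ** 2 / (interference_e + sigma2)
            return max(np.log2(1.0 + sinr_user) - np.log2(1.0 + sinr_eve), 0.0)

        # Toy example: 4 transmit antennas, maximum-ratio transmission toward the user.
        rng = np.random.default_rng(0)
        N = 4
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        w = np.sqrt(10.0) * h / np.linalg.norm(h)   # transmit power 10, steered at the user
        print(secrecy_rate(w, h, g))

    A path-following procedure of the kind the paper proposes would iteratively update the beamforming vectors to increase such secrecy throughputs (or the network's energy efficiency) while respecting the power and QoS constraints; that procedure is not reproduced here.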

    Real-Time Principal-Component Analysis

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in "Method of Real-Time Principal-Component Analysis" (NPO-40034), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate, chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
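
    As a hedged sketch of the generic technique the abstract describes (sequential gradient-descent PCA with dominant-element updates and a per-component initial learning rate), and not of the NPO-40034 software itself, the Python fragment below extracts principal vectors one at a time; the learning-rate scaling, the Oja-rule gradient, and all names are illustrative assumptions.

        import numpy as np

        def sequential_pca(X, n_components, eta0=0.01, sweeps=50):
            """Hedged sketch of gradient-descent sequential PCA.

            X            : (n_samples, n_features) data matrix, assumed zero-mean
            n_components : number of principal vectors to extract
            eta0         : base learning rate
            sweeps       : passes over the data per component
            """
            X = X.copy()
            total_energy = np.sum(X ** 2)
            W = np.zeros((n_components, X.shape[1]))

            for k in range(n_components):
                # "Dynamic initial learning rate": scale up as the residual energy
                # left by earlier extractions shrinks (illustrative choice).
                eta = eta0 * total_energy / max(np.sum(X ** 2), 1e-12)
                w = np.random.default_rng(k).standard_normal(X.shape[1])
                w /= np.linalg.norm(w)

                for _ in range(sweeps):
                    for x in X:                       # vectors arrive sequentially
                        y = w @ x
                        grad = y * (x - y * w)        # Oja-rule gradient
                        j = np.argmax(np.abs(grad))   # update the dominant element only
                        w[j] += eta * grad[j]
                        w /= np.linalg.norm(w)

                W[k] = w
                X -= np.outer(X @ w, w)               # deflate the extracted component
            return W

    On a small zero-mean data set the rows of W should approximate the leading eigenvectors of the sample covariance matrix, which is a convenient correctness check for a sequential scheme of this kind.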

    Real-time Optimal Resource Allocation for Embedded UAV Communication Systems

    We consider device-to-device (D2D) wireless information and power transfer systems using an unmanned aerial vehicle (UAV) as a relay-assisted node. As the energy capacity and flight time of UAVs are limited, a significant issue in deploying UAVs is managing energy consumption in real-time applications, which is proportional to the UAV transmit power. To tackle this issue, we develop a real-time resource allocation algorithm that maximizes energy efficiency by jointly optimizing the energy-harvesting time and the power control for the considered UAV-assisted D2D communication system. We demonstrate the effectiveness of the proposed algorithm, whose running time is on the order of milliseconds.

    Comment: 11 pages, 5 figures, 1 table. This paper is accepted for publication in IEEE Wireless Communications Letters.
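
    The abstract does not give the underlying system model, so the sketch below only illustrates the kind of objective involved: the energy efficiency (throughput divided by consumed energy) of a toy harvest-then-transmit D2D link as a function of the energy-harvesting time fraction and the transmit powers. The model, the function energy_efficiency, and every parameter value are assumptions for illustration, not the paper's formulation.

        import numpy as np

        def energy_efficiency(tau, p_uav, g_harvest, g_data,
                              eta_eh=0.7, sigma2=1e-9, p_circuit=0.1):
            """Energy efficiency (bits/s/Hz per Joule) of a toy harvest-then-transmit D2D link.

            tau       : fraction of the slot used for energy harvesting (0 < tau < 1)
            p_uav     : UAV transmit power during the harvesting phase (W)
            g_harvest : channel power gain, UAV -> D2D transmitter
            g_data    : channel power gain, D2D transmitter -> D2D receiver
            eta_eh    : energy-harvesting conversion efficiency
            sigma2    : receiver noise power (W)
            p_circuit : fixed circuit power of the D2D transmitter (W)
            """
            harvested = eta_eh * p_uav * g_harvest * tau          # energy gathered in phase 1
            p_tx = harvested / (1.0 - tau)                        # power available in phase 2
            rate = (1.0 - tau) * np.log2(1.0 + p_tx * g_data / sigma2)  # throughput per slot
            energy = tau * p_uav + (1.0 - tau) * p_circuit        # total energy spent per slot
            return rate / energy

        # Coarse one-dimensional search over the harvesting time, standing in for
        # the paper's joint real-time optimization.
        taus = np.linspace(0.05, 0.95, 91)
        best = max(taus, key=lambda t: energy_efficiency(t, p_uav=1.0, g_harvest=1e-3, g_data=1e-3))
        print(best, energy_efficiency(best, 1.0, 1e-3, 1e-3))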

    System and method for cognitive processing for data fusion

    A system and method for cognitive processing of sensor data. A processor array that receives analog sensor data and has programmable interconnects, multiplication weights, and filters provides for adaptive learning in real time. A static random-access memory contains the programmable data for the processor array, and the stored data are modified to provide for adaptive learning.

    ENGLISH LEXICAL STRESS ASSIGNMENT BY EFL LEARNERS: INSIGHTS FROM A VIETNAMESE CONTEXT

    In English, the accurate assignment of lexical stress is of paramount importance in attaining good pronunciation and speech intelligibility; however, it is by no means an easy task for many EFL learners, especially those whose first languages have no system of word stress. Vietnamese learners, for example, often face problems with the placement of lexical stress because their mother tongue is not a stress language but a tonal one. The current study was conducted to yield more insights into Vietnamese learners’ acquisition of word stress in this regard. Specifically, it investigated (1) the extent to which Vietnamese learners were able to assign stress patterns in English multisyllabic words and (2) whether there was a statistically significant correlation between their competence in recognizing and in producing English lexical stress. Data for the study were gained from 45 elementary EFL learners studying English at a foreign language center in the Mekong Delta of Vietnam. The process of data collection started with stress-assignment tests (i.e., a recognition test and a production test), followed by a comparative analysis of the participants’ performance on these tests and subsequently a retrospective interview. The results indicated that the participants’ overall level of competence in assigning stress in English words was just above average. It was also found that the participants performed better on the recognition test than on the production test, and several factors contributed to this inconsistency. A positive correlation between the participants’ recognition and production of lexical stress patterns was also observed in this research.

    AGENT-BASED MONITORING & MANAGEMENT SYSTEM: UNIVERSITI TEKNOLOGI PETRONAS (UTP) GRADUATE ASSISTANTSHIP CLAIM PROCESS

    This project investigates the process of allowance claiming carried out monthly by Graduate Assistants (GAs) in Universiti Teknologi PETRONAS (UTP), which eventually led to the development of a Web-based system called “Agent-based Monitoring & Management System: Universiti Teknologi PETRONAS (UTP) Graduate Assistantship Claim Process” (GACMS) to digitalize every step involved in that process. The main objective is to overcome problems such as human error, wasted manpower, and the inconvenience caused by the manual approach currently in use. Moreover, to enhance the system’s capability, Multiple Agent Based (MAB) theory will be applied so that GACMS can act as a smart personal assistant that facilitates each step in the procedure. Prior to development, comprehensive research was conducted within the GA community to assess the project’s necessity, and it received strong support from participants. Furthermore, the project is developed using a prototyping methodology for better alignment with dynamically changing user requirements. Thus, it is believed that GACMS, once implemented, will become a helpful platform to further boost the efficiency and productivity of the allowance claiming process.

    Pattern Recognition Algorithm for High-Sensitivity Odorant Detection in Unknown Environments

    In a realistic odorant detection application environment, the collected sensory data reflect a mixture of unknown chemicals at unknown concentrations, contaminated by noise. Identifying the odorants in such mixtures is a challenge in data recognition, as is deriving their individual concentrations. A deterministic analytical model was developed to accurately identify odorants and calculate their concentrations in a mixture from noisy data.
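
    The abstract does not specify the analytical model, so the following is a hedged sketch of one standard deterministic formulation of the mixture problem: if each candidate odorant has a known sensor-array signature, a noisy mixed measurement can be unmixed by non-negative least squares, and odorants whose estimated concentrations fall below a threshold are treated as absent. The signature matrix, noise level, and threshold are illustrative assumptions.

        import numpy as np
        from scipy.optimize import nnls

        # Columns of A are known sensor-array signatures of candidate odorants
        # (responses of 6 sensors to unit concentration of each of 4 odorants).
        rng = np.random.default_rng(1)
        A = np.abs(rng.standard_normal((6, 4)))

        true_conc = np.array([0.8, 0.0, 0.3, 0.0])         # only odorants 0 and 2 present
        y = A @ true_conc + 0.01 * rng.standard_normal(6)  # noisy mixed measurement

        conc_est, residual = nnls(A, y)                    # non-negative least squares
        detected = conc_est > 0.05                         # simple presence threshold
        print(conc_est, detected)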

    Cascade Back-Propagation Learning in Neural Networks

    The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: a network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network, to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks: one for training and one for validation. Both networks would share the same synaptic weights.
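
    As a hedged software illustration of the training procedure described above (not of the analog VLSI CBP circuitry), the sketch below trains a tiny feed-forward network by back-propagation on a training set while monitoring the error on a cross-validation set to stop before over-learning. The network size, data, and stopping rule are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def forward(W1, W2, X):
            H = np.tanh(X @ W1)          # hidden-layer activations
            return H, H @ W2             # network outputs

        def mse(Y_hat, Y):
            return float(np.mean((Y_hat - Y) ** 2))

        # Toy data: learn y = sin(x) from noisy samples, split into training and
        # cross-validation sets whose input/output pairs are known.
        X = rng.uniform(-3, 3, size=(200, 1))
        Y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
        X_tr, Y_tr, X_cv, Y_cv = X[:150], Y[:150], X[150:], Y[150:]

        W1 = 0.5 * rng.standard_normal((1, 16))
        W2 = 0.5 * rng.standard_normal((16, 1))
        lr, best_cv, patience = 0.01, np.inf, 0

        for epoch in range(2000):
            # Back-propagation: push the output error back through both weight layers.
            H, Y_hat = forward(W1, W2, X_tr)
            dY = 2.0 * (Y_hat - Y_tr) / len(X_tr)
            dW2 = H.T @ dY
            dH = (dY @ W2.T) * (1.0 - H ** 2)   # derivative of tanh
            dW1 = X_tr.T @ dH
            W1 -= lr * dW1
            W2 -= lr * dW2

            # The cross-validation set guards against over-learning.
            cv_err = mse(forward(W1, W2, X_cv)[1], Y_cv)
            if cv_err < best_cv - 1e-6:
                best_cv, patience = cv_err, 0
            else:
                patience += 1
                if patience > 50:               # stop once validation error stalls
                    break

        print(epoch, best_cv)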